iJS CONFERENCE Blog

Managing Your Code Quality with Node.js – Intro to Node.js – Part 3

Jul 28, 2021

When developing with Node.js, it is not only important to write the actual code, but also to ensure quality. In addition to conscientious code structuring, this also includes applying code guidelines and executing automated tests.

In the second part of this series, we built a Node.js application that provides an API for managing a simple task list. The API deliberately does not use REST; it only separates writing from reading: writing operations are based on POST, reading on GET.

The actual semantics were moved into the path of the URL, so that the domain intention remains visible and the API is far more comprehensible than if it relied solely on the four technical verbs provided by REST. The current state of the application contains three routes: for noting tasks, for ticking them off, and for listing all pending tasks:

  • POST /note-todo
  • POST /tick-off-todo
  • GET /pending-todos

The separation of writing and reading continues in the code, where domain-oriented names are also used. A distinction is made between functions that change the state of the application and functions that read and return the current state. The application thus implements a simple form of the CQRS design pattern [1], which recommends exactly this separation.
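In the Todos class from part 2, this separation is reflected roughly as follows. This is a sketch of the interface only; the method bodies are omitted, and the exact signatures are inferred from the routes and JSON schemas discussed in this article:

'use strict';
 
// Sketch of the Todos interface; signatures inferred, bodies omitted.
class Todos {
  // Commands: change the state of the application.
  async noteTodo ({ title }) { /* ... */ }
 
  async tickOffTodo ({ id }) { /* ... */ }
 
  // Query: read and return the current state.
  async getPendingTodos () { /* ... */ }
}
 
module.exports = Todos;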

Input validation using JSON schemas is also implemented, as is a simple RAM-based data storage, which can easily be replaced with a database-backed approach later. There is no client yet, but the HTTP API can easily be exercised manually using tools like curl.
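For example, assuming the server is running on its default port 3000, a task can be noted and the pending list queried like this (the title is just a sample value):

$ curl -X POST -H 'Content-Type: application/json' -d '{"title":"Buy milk"}' http://localhost:3000/note-todo
$ curl http://localhost:3000/pending-todos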

The biggest shortcoming in the code is the current lack of structure. Although the HTTP server (./app.js), the API (./lib/getApp.js), and the data storage (./lib/Todos.js) are basically separated, numerous aspects are mixed together, especially in the API. Part of this article addresses how to structure this better. The second problem is that quality assurance has so far only been done manually, which could be significantly accelerated and improved by applying code guidelines and running automated tests.

Applying code guidelines

Applying code guidelines is the easiest part. A code guideline is a specification of how code should be formatted or which language feature should be used in a particular situation. A simple example would be the specification to generally indent code with two spaces and to always perform comparisons with the type-safe === operator. In both cases, you can try to check this manually on a regular basis. However, this work is time-consuming and error-prone. It is advisable to assign this task to a tool that can perform the check automatically. Such a tool is called a linter. There are various linters for JavaScript, with ESLint [2] being the de facto standard.

Basically, it’s easy to add ESLint to a project [3]. All you need to do is install the eslint module using npm. ESLint is not needed to run the application in this case, only at development time, so you can install the module as a so-called devDependency, i.e. a dependency that is only relevant during development. For this purpose, the parameter --save-dev must be passed to npm when it is called:

$ npm install eslint --save-dev

Now the task is to call ESLint. This is done via the npx tool included in the Node.js installation, which can be used to call executable modules. However, the bare call $ npx eslint is sobering, as it does nothing. The reason is that ESLint expects a list of files to be checked as a parameter. Since you usually don’t want to specify all files in all subdirectories individually, you can use a so-called glob pattern that describes the desired files. It is important to put quotation marks around the pattern; otherwise, the shell will expand the paths instead of leaving that to ESLint.

$ npx eslint './**/*.js'

Unfortunately, this call does not produce the desired result either, but at least ESLint now indicates that it cannot find a configuration. In fact, ESLint is highly configurable: for each project, you can individually define which rules should be active. Such a configuration can be generated as a starting point by calling ESLint with the parameter --init, but there is a better way. It is not advisable to store a separate ESLint configuration in each project, since you will usually want a uniform configuration across all projects. After all, indentation should not be handled one way in one project and another way in the next, but always in the same way. Therefore, ESLint offers the possibility to outsource a configuration into a separate npm module, which can then be installed in a project. The configuration file in the project only needs to reference this npm module. Such configurations vary in complexity. A particularly strict variant is eslint-config-es [4], which can be easily installed:

$ npm install eslint-config-es --save-dev

Afterward, all that is needed is a configuration file called .eslintrc.json, which must be placed in the application’s root directory. In this file, you only need to specify the key extends, which determines which npm module is used as the starting point for the project’s rules. The eslint-config-es module contains rules for both Node.js and the web browser. Since we are dealing with a Node.js application here, the entry should be set as follows:

{
  "extends": "eslint-config-es/node"
}

If you call the code analysis with the already known command, a handful of errors are reported (Listing 1), although they are only minor.

./app.js
  8:35  error  Invalid group length in numeric value  unicorn/numeric-separators-style
 
./lib/Todos.js
  10:20  error  Expected 'this' to be used by class async method 'initialize'  class-methods-use-this
 
./lib/getApp.js
  29:3  error  Expected blank line before this statement  padding-line-between-statements
  32:3  error  Expected blank line before this statement  padding-line-between-statements
 
✖ 4 problems (4 errors, 0 warnings)
  3 errors and 0 warnings potentially fixable with the `--fix` option.

The two error messages regarding the ./lib/getApp.js file are easy to fix; in each case, only a blank line is missing in the code. The other two cases are more interesting. In the ./app.js file, the line const port = processenv('PORT', 3000); is criticized. The reason is that the ruleset used requires long numeric literals to use thousands separators for better readability, which has been possible in JavaScript since ECMAScript 2021 thanks to the numeric separator feature [5]. Corrected, the line reads:

const port = processenv('PORT', 3_000);

The message regarding the ./lib/Todos.js file concerns the initialize function, which so far does not contain any code except for a comment:

async initialize () {
  // Intentionally left blank.
}

Here, ESLint is bothered by the fact that the function is declared as an instance method but never accesses the instance, since the keyword this is not used anywhere in the function body. From a technical point of view, the criticism is justified, but at this point it does not make sense to declare the function as static. In this case, the simplest approach is to instruct ESLint to ignore the error locally:

// eslint-disable-next-line class-methods-use-this
async initialize () {
  // Intentionally left blank.
}

However, use this procedure with caution, because it should be assumed that every error reported by ESLint has a fixable cause. Muting ESLint should always be a last resort and should be accompanied by a comment explaining why the rule is disabled, if the reason is not obvious.
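Such a comment might look like this (the wording here is hypothetical, just to illustrate the practice):

// initialize is deliberately an instance function, so the public
// interface stays stable once it actually needs access to this.
// eslint-disable-next-line class-methods-use-this
async initialize () {
  // Intentionally left blank.
}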

With this, ESLint is almost completely set up; the only thing missing is a convenient way to invoke it. The easiest way is to add a scripts section to the ./package.json file, in which you store an analyse command that performs the desired call. The name of the command can be freely chosen, so you could just as well call it lint or eslint:

"scripts": {
  "analyse": "npx eslint './**/*.js'"
}

To invoke this command, just run npm with the run parameter and the name of the command. This makes the code analysis available in an automated way: $ npm run analyse

Structuring the code

After the static code analysis is set up, the next step is to better structure the code [6]. What characterizes well-structured code can be discussed at length; ultimately, many rules can be traced back to a few basic principles. One essential principle is the Single Responsibility Principle (SRP) [7], which states that a function or class should have only one responsibility, and that everything that could be reused in another context should be isolated. A good example of this is the separation of the ./app.js and ./lib/getApp.js files. While the first takes care of the HTTP server, the second concerns the definition of the API. Theoretically, both aspects could have been put in a common file, but the resulting file would then have had more than one responsibility.

There could then have been entirely different reasons to modify that file in the future, such as switching from HTTP to HTTPS or adding a new route. However, since adapting the protocol can be done independently of the API, and adding a new route independently of the server, these changes could have gotten in each other’s way. In this respect, the path already chosen is the better-structured one.
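As a reminder of what the server side looks like, here is a minimal sketch of ./app.js. It is a sketch only: it assumes the processenv module is imported as shown, and the actual file from part 2 may differ in detail:

'use strict';
 
const http = require('http');
 
const { processenv } = require('processenv');
 
const getApp = require('./lib/getApp');
 
// Read the port from the environment, falling back to 3_000.
const port = processenv('PORT', 3_000);
 
(async () => {
  const app = await getApp();
 
  // Wrap the Express application in an HTTP server and start it.
  http.createServer(app).listen(port);
})();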

However, this clean separation is not yet found in the ./lib/getApp.js file: it actually does too much. It not only defines the API, but also contains the definition of each route and the JSON schemas. It is a good idea to separate these aspects. The easiest place to start is the JSON schemas, each of which can be extracted into its own file. To keep the ./lib directory manageable, you can create a new directory for this purpose: ./lib/schemas. This results in the file ./lib/schemas/noteTodo.js with the contents shown in Listing 2. The file ./lib/schemas/tickOffTodo.js looks similar (Listing 3).

'use strict';
 
const { Value } = require('validate-value');
 
const noteTodo = new Value({
  type: 'object',
  properties: {
    title: { type: 'string', minLength: 1 }
  },
  required: [ 'title' ],
  additionalProperties: false
});
 
module.exports = noteTodo;

The corresponding schema for ticking off a task (Listing 3) looks like this:

'use strict';
 
const { Value } = require('validate-value');
 
const tickOffTodo = new Value({
  type: 'object',
  properties: {
    id: { type: 'string', format: 'uuid' }
  },
  required: [ 'id' ],
  additionalProperties: false
});
 
module.exports = tickOffTodo;

The routes can be extracted in the same way. However, note that they need access to the todos object, which is currently shared by all routes. Since it is not possible to pass your own parameters to an Express route handler, a trick is used here: instead of extracting the actual route function, you extract a function that returns that route function. The outer function can be parameterized as desired, while the returned route function can still access the parameters through its closure. For clarity, it is recommended to create a separate directory, for example ./lib/routes. Below this, you can create the subdirectories commands and queries in order to cleanly separate write and read access from each other. The file ./lib/routes/commands/noteTodo.js contains the function shown in Listing 4.

'use strict';
 
const noteTodoSchema = require('../../schemas/noteTodo');
 
const noteTodo = function ({ todos }) {
  if (!todos) {
    throw new Error('Todos is missing.');
  }
 
  return async (req, res) => {
    if (!noteTodoSchema.isValid(req.body)) {
      return res.status(400).end();
    }
 
    const { title } = req.body;
 
    await todos.noteTodo({ title });
    res.status(200).end();
  };
};
 
module.exports = noteTodo;

The other two routes are defined in the same way; a sketch of the query route follows Listing 5 below. The ./lib/getApp.js file, which defines the actual API, becomes much shorter and clearer. Its meaning can be grasped at a glance (Listing 5).

'use strict';
 
const bodyParser = require('body-parser');
const cors = require('cors');
const express = require('express');
const getPendingTodos = require('./routes/queries/getPendingTodos');
const noteTodo = require('./routes/commands/noteTodo');
const tickOffTodo = require('./routes/commands/tickOffTodo');
const Todos = require('./Todos');
 
const getApp = async function () {
  const todos = new Todos();
 
  await todos.initialize();
 
  const app = express();
 
  app.use(cors());
  app.use(bodyParser.json());
 
  // Commands
  app.post('/note-todo', noteTodo({ todos }));
  app.post('/tick-off-todo', tickOffTodo({ todos }));
 
  // Queries
  app.get('/pending-todos', getPendingTodos({ todos }));
 
  return app;
};
 
module.exports = getApp;
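For illustration, the query route ./lib/routes/queries/getPendingTodos.js, which Listing 5 imports, might look like the following sketch. It follows the same pattern as Listing 4 and assumes that the Todos class from part 2 provides a getPendingTodos function that returns all unfinished tasks:

'use strict';
 
const getPendingTodos = function ({ todos }) {
  if (!todos) {
    throw new Error('Todos is missing.');
  }
 
  return async (req, res) => {
    // Read the current state and return it as JSON.
    const pendingTodos = await todos.getPendingTodos();
 
    res.json(pendingTodos);
  };
};
 
module.exports = getPendingTodos;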

Structuring the code this way allows you to think in smaller units and to abstract more. This makes the code easier to maintain and reason about, and it enables more efficient testing.

Testing the code

In order to test code, you first need a testing framework that handles test execution. As with static code analysis, there are numerous options, but the de facto standard is Mocha [8]; alternatively, Jest [9] is gaining in popularity. However, the differences are so marginal that you can easily switch from one to the other without having to learn much new. Mocha has a much longer history, though, and avoids some of the problems that Jest newcomers tend to run into. In order to use Mocha, you must first install it. This is done via npm, where Mocha is again a devDependency, since tests are executed exclusively at development time:

$ npm install mocha --save-dev

Now you can call Mocha the same way as ESLint, although that doesn’t do much yet, since there isn’t a single test: $ npx mocha. To write a test, first create a directory named ./test. This is the default name expected by Mocha; it can be changed in theory, but in practice you usually stick with the suggested default. It is also useful to mirror the lib directory within the test directory, as shown in the sketch below. This means that the tests for ./lib/schemas/noteTodo.js can be found in ./test/schemas/noteTodoTests.js. The appended Tests suffix is not mandatory, but it helps to quickly narrow down where an error came from when reading error messages.
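The resulting directory layout then looks roughly like this (route and schema files abbreviated for brevity):

.
├── app.js
├── lib
│   ├── getApp.js
│   ├── routes
│   ├── schemas
│   └── Todos.js
└── test
    └── schemas
        └── noteTodoTests.js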

To define a test, you call the test function, passing it the name of the test and a callback containing the actual test code. Several tests can be combined into a suite, which works according to the same principle. The basic skeleton of the ./test/schemas/noteTodoTests.js file therefore looks like Listing 6.

'use strict';
 
suite('noteTodo', () => {
  test('returns true if a title is given.', async () => {
    // ...
  });
});

In principle, the wording of the test description is arbitrary. A proven approach, however, is to phrase the suite name and test name so that together they form a sentence. A test description that begins with a verb in the third person singular scales very well, even with many tests.

In addition to the terms suite and test, Mocha also supports the variants describe and it. Semantically this makes no difference; it is just a different syntax. Since Mocha assumes by default that you want to work with describe and it, you have to include the --ui parameter in the call and set it to tdd to enable the syntax with suite and test. There are some other parameters that should generally be used:

  • --async-only enforces that every callback function of a test must be asynchronous. This prevents unwanted results for code that works asynchronously but is called without await.
  • --bail aborts the test execution after the first failed test. This way you can find errors faster and don’t have to scroll through pages of output in the terminal.
  • --recursive causes Mocha to also consider files in subdirectories of the test directory.

With this, the call to Mocha looks like:

$ npx mocha --async-only --bail --recursive --ui tdd

The result is a message saying that the test has been successfully completed:

noteTodo
  ✓ returns true if a title is given.
 
 
1 passing (4ms)

This is because the test did not run into an error: any test that runs through without throwing is considered successful by Mocha. Before proceeding with the test, it is recommended to add another entry to the scripts section in the package.json file:

"scripts": {
  "analyse": "npx eslint './**/*.js'",
  "test": "npx mocha --async-only --bail --recursive --ui tdd"
}

It’s also practical to define a qa script that combines both steps:

"scripts": {
  "analyse": "npx eslint './**/*.js'",
  "qa": "npm run analyse && npm run test",
  "test": "npx mocha --async-only --bail --recursive --ui tdd"
}

The actual test must verify that the schema works. This means checking that valid input really causes true to be returned as the result. To do this, the test must first be adapted as in Listing 7.

'use strict';
 
const noteTodo = require('../../lib/schemas/noteTodo');
 
suite('noteTodo', () => {
  test('returns true if a title is given.', async () => {
    const isValid = noteTodo.isValid({
      title: 'the native web'
    });
 
    // ???
  });
});

However, the question of how the value isValid can be checked remains open. This requires a so-called assertion module, for example assertthat [10], which can also be installed via npm as a devDependency:

$ npm install assertthat --save-dev

In the test file it is now necessary to import this dependency:

const { assert } = require('assertthat');

Then you can replace the comment in the test by calling assert:

assert.that(isValid).is.true();

If you now run the tests, the test is displayed in green again. If you use the call is.false() instead of is.true(), it is easy to prove that the test actually works: the error message says that the value true was returned although false was expected:

AssertionError [ERR_ASSERTION]: Expected true to be false.

Of course, one test alone is far from enough, but writing further tests is now mostly routine work. The earlier you start adding tests to a project, the better the quality you achieve right from the start. Tests also play a major role in building a well-maintainable structure: well-testable code is always well-structured code, whereas poorly structured code is generally difficult to test. Good structure and simple tests are mutually dependent.
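For example, a second test covering the negative case could be added to the same suite in the same style (a sketch; the expected result follows from the schema’s required list):

test('returns false if the title is missing.', async () => {
  const isValid = noteTodo.isValid({});

  assert.that(isValid).is.false();
});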

Outlook

This concludes the third part of the series on Node.js. Of course, there is much more to discover, especially in the area of testing when the requirement goes beyond pure unit tests. Another article could be dedicated to the topic of integration tests alone.

The next step is to add more functionality to the application and then run it in production. As with any modern web and cloud application, this should be done with the help of Docker and Kubernetes, although there are some pitfalls to be aware of. These topics will be covered in the fourth part of the series, before a client for the application is created in the fifth and final part.

The author’s company, the native web GmbH, offers a free video course on Node.js [11] with close to 30 hours of playtime. Episodes 11, 12, and 15 of this video course deal with the topics covered in this article, such as static code analysis and writing unit and integration tests. This course is recommended to anyone interested in more in-depth details.

Links & Literature

[1] https://www.youtube.com/watch?v=k0f3eeiNwRA&t=1s

[2] https://eslint.org

[3] https://www.youtube.com/watch?v=7J6Yg21BOeQ

[4] https://www.npmjs.com/package/eslint-config-es

[5] https://github.com/tc39/proposal-numeric-separator

[6] https://www.youtube.com/watch?v=2tE_1RSmXkQ

[7] https://www.youtube.com/watch?v=3to0t7YQMnU

[8] https://mochajs.org

[9] https://jestjs.io

[10] https://www.npmjs.com/package/assertthat

[11] https://www.thenativeweb.io/learning/techlounge-nodejs
